Composing Space: Cinema and Computer Gaming - The Macro-Mise en Scene and Spatial Composition
Author
Abstract
For some time the influence of cinema on computer gaming has been immediately recognizable, but recent years have seen a reversal of this movement. Through its short history cinema has presented little notion of an imagined space beyond the defined borders of the cinematic frame. Frames capture a composed image in totality, complete unto itself. Cuts shift between wholly framed spaces. Tracking and panning move through a series of successive frames, each complete. There is an unvoiced acceptance on the part of viewers that all that is important in a scene will take place within the screen's frame. But in the 21st century, many of the key aesthetics of audience acceptance and visual understanding of a broader cinematic space derive not from cinema but from computer gaming: a larger, more complex imaginary world composed by an auteur in-space rather than in-frame. The process of designing and producing a 3D game is both aesthetically and practically that of creating a macro-mise en scene containing the entire imaginary world. During game play individual frames will be 'composed' by the camera/player, but the larger macro-mise en scene remains fully intact and the player/viewer's awareness of it as a composition is never diminished. This presentation will look at the changing notion of the cinematic imaginary space based on multi-channel sound and computer gaming aesthetics. It will examine sound design in films such as Gus Van Sant's Elephant under the influence of the games Doom 3 and Half-Life 2 as a key example of a new compositional sensibility that privileges a macro-mise en scene space above the specifics of the individual frame and, moreover, constructs for an audience a functional acceptance of a larger directorial composition; one that stretches far beyond the borders of the TV screen.
Paper
The Macro-Mise en Scene – Sound in Space & the World Beyond the Frame
Even taking into account the relatively short history of cinema as an art form, there have been remarkably few paradigm shifts, stylistic or technical, in its means and form. Cinema, for the most part, is made and viewed now much as it has ever been. The introduction of synchronous recorded sound[1] and colour are obviously the two most significant of these seminal changes over the past one hundred years, but there are other, more recent, shifts that, whilst more subtle and less obvious to the average movie-goer, have nonetheless been enormously impactful, particularly on our notions of cinematic space. Far less well documented, and yet arguably just as significant to our perception of what cinema is and how we experience it, is the development of multi-channel sound, best known as Surround Sound. The CinemaScope biblical epic The Robe (1953) was one of the first films to embrace the notion of a tangible and audience-aware space beyond the edges of the frame in more than just an ephemeral or implied way: a space indicated, constructed and conveyed in aural terms relative to the position of the audience in the theatre. The Robe is the first film where a line of dialogue marked in the screenplay as OS (meaning off-screen) is actually heard to emanate from a space 'off-screen', outside the borders of the mise en scene frame. This simple predecessor to what is now arguably the cinematic standard, particularly with the widespread take-up of surround-sound home theatre and computer gaming systems, is leading directly to a new audience perception of what comprises the imaginary cinematic space. "The end of the image is no longer the edge of the screen. We are completely immersed in a sound universe and feel as if we are actually in the space of the action, because we can hear the action surround us" (Yu, E. 2003). Truppin, in his essay And Then There Was Sound (1992), discusses this idea of off-screen sound and writes that "our tendency to attach a sound to an emitting source in the interests of coherence allows us to accept the existence of that which we cannot see. While the picture of the sound source may not be available the imagination steps in to provide a mental image of this sound source. In this way off-screen sound enlarges the film space beyond the borders of the screen." (p. 236)

[1] The distinction of 'recorded synchronous sound' rather than simply sound in general is significant because from the very outset of cinema, sound (and particularly music) was a vital and integral part of the cinematic experience, long before there was any means to record it and embed it into the picture. Originally played live on piano or even by a multi-musician orchestra (the music in this case chosen locally by the in-house resident musician), it was not long before studios would prescribe specific musical pieces for individual films, to be played from piano roll on pneumatically driven and automated player-pianos (pianolas). This was in effect the first form of recorded sound for film; what was missing, the small next step, was to embed the audio as sound-on-film, physically attached and played back synchronously with the picture. It must also not be forgotten that along with musical scores cinema had a long-established tradition and practice of Foley sound effects performed live with the picture to aurally accentuate dramatic and comic moments. Long before sound playback systems it was common for a cinema to have a 'Fotoplayer', a form of pneumatic player-piano that also included a range of percussion sound devices (bells, whistles, drums, cymbals, clappers and buzzers). With this in mind the term 'Silent Era' is certainly a misnomer, as the experience of seeing a so-called 'silent' film was far from silent.

Obviously there is a correlation here between spatially placed sound and a desire to create a greater sense of linked spatial/visual realism. André Bazin has described the cinematic realism he holds so dear as residing "in the homogeneity of the space" (Bazin 1967, p. 50, footnote). But he was speaking specifically of the visual mise en scene space, the framed space in deep focus. What about the space defined not by visual elements but by aural ones? How is our definition and understanding of mise en scene as a framing/composition technique challenged by an aural space that is bigger and more realistically holistic than the visual framed mise en scene can ever be? Through traditional sound-replication systems and practices, namely monaural sound, the space of a film's action and events exists at a distance from the viewer, a space beyond the screen which presents itself as a window onto an alternate world. Once sound, however, is expanded beyond the framed screen, the audience is shifted from their traditional role and placed into the film's environment. No longer does the viewer look through the screen window at rain falling in a removed place onto the characters. They sit in the rain, the same rain falling on the characters. The fundamental positioning and role of spectatorship on the part of viewers is shifted, and this shift is absolutely central to a large section of gaming genres in the composition of an imaginary 'world' rather than an imaginary scene. The traditional mise en scene privileges the frame contents as of utmost significance. The sounds of events occurring or emanating outside the frame serve as a means to repopulate the visual frame, generally by prompting the camera to move. Once the source of these sounds is removed from the frame it is diminished in importance and audience/narrative relevance.
Generally this is achieved technically by the sound being mixed down in volume regardless of the sound source's physical proximity to the viewer/camera's position in the scene. In multi-channel sound, however, the camera is not confined in this way, because the sound is now able to provide physically and spatially specific information, tied heavily to physical reality and proximity, as well as a viewer-based realism, to the scene. This sound can be directly connected and diegetic to the visual aesthetic of the scene but need not be specifically visually connected to the screen-framed image. Computer game sound designer Chia Chin Lee explains simply that "The viewers are sharing the same auditory world as the characters on-screen" (Lee CC, 1999), and so many aural elements exist beyond our sight, and subsequently the sight of the characters, but can nonetheless be an integral and specific part of the scene. A great deal of contemporary understanding of the mise en scene comes from the work of Bordwell and Thompson and their book Film Art (Bordwell & Thompson 1989). They comment that "in most films those shapes (mise en scene elements and contents) also represent a three dimensional space in which action occurs. Since the image is flat the mise en scene must give us cues that will enable us to infer the three dimensionality of the scene." (Bordwell & Thompson 1989, p. 136) The assumption in this understanding of cinema space, derived as it is from the framed, screen-based mise en scene rather than an immersive gaming one, is that the three-dimensional world of the narrative exists beyond, on the other side of, the cinematic frame/screen, and subsequently that the viewer does not share the same space as the action. Instead the screen image 'infers' three-dimensionality based solely on light, dark and shadow. The audience observes from a removed 'God' position that is non-diegetic to the scene.
Space, or a 'world', in this context is 'representational' rather than 'actual', as it implies that only the action takes place in the cinema space, not the process of viewing. Surround-sound technology, and the widespread practice of spatially placing sound around the viewer, drives a new understanding of the act of spectatorship as taking place within that same space. This notion of a surrounding, realistic, aural space is one that is most at home in, and most integral to, contemporary 3D computer gaming, specifically the genre of games dedicated to creating an immersive, personal experience, otherwise known as First-Person Shooter games. In gaming it is the 'Space' rather than the 'Frame' that is 'composed' by a game designer/director. This is true both aesthetically and practically in terms of the actual process of designing game levels and complete 3D virtual spaces. Subsequently the visual frame in computer gaming is free to move over the more holistic 'composition' without negating or detracting from the importance of compositional elements outside of the visual frame. This I have begun to refer to as the Macro-mise en scene. The audience may be viewing the narrative/story/experience through the hard borders of a visual frame, but there is no illusion or pretence of the frame presenting any totality of vision, because a larger composition is constructed for the viewer through spatially specific sound. The viewer accepts that the frame is just a small part of the composed scene, not the scene in its entirety. Where this idea clashes with cinema is that conventional film grammar presents us with a cinema sound that is largely representational rather than authentic. Sounds that are specifically necessary to the forward progression of the story are delivered in a manner that allows the viewer to 'accept' that they exist, but not in a way that demands or replicates an aural actuality.
Soundscapes undergo a precise process of 'mixing' where the volume levels of individual sounds are not only precisely set but also controlled over time, faded in and faded out with changing relative proportions in relation to other sounds in the scene. As an example, in Psycho (Hitchcock 1960) the sound of the car horn beeping as the character of Marion attempts to get the attention of the motel manager (Norman Bates) is a sound that exists in the cinema space to represent the act of Marion demanding the attention of the motel manager and wanting to get out of the rain. As the camera cuts from Marion in the car to a removed upward angle of the house there is no change in the sound of the horn, which continues to play. Its reverberation, echo and spatial location relative to the car, the house or indeed the position of the viewer remain unchanged. The horn sound is simply there to say 'a horn is sounding with urgency' rather than to have an actuality of a horn sound in space and time relative to the viewer's position. The car horn is a representational signifier.
Computer Games and the New Perspective
This notion of sound as a signifier rather than a representation of actuality is distinctly different from, and opposed to, the aural aesthetic and grammar of 3D game sound in defining an imaginary 'world', where sound most often exists in 'actuality' rather than simply as a signifier. Important game sounds will continue regardless of the camera/user's presence. Again the mise en scene, both aural and visual, doesn't seek to present a totality of vision; there is no endeavour on the part of game creators, nor expectation on the part of game players/viewers, that the framed mise en scene is complete. The framed mise en scene is just one small element of a larger compositional space the viewer is engaging with in actuality.
In First-Person Shooter (FPS) genre games (key examples being Doom 3 (2004), Half-Life 2 (2004), Quake 3 (1999), etc.) it is common for the player's perspective/avatar to encounter another character (known as an NPC, a non-player character) who will speak in direct address (to camera, in film terms), generally conveying information important to the narrative. Because the player/viewer directly controls the visual mise en scene they are free to turn the 'camera' (that is, their perspective view) away from the NPC. They can even walk away and explore another part of the space/room/environment. The sound of the NPC's voice will continue regardless of the 'camera' shift, but the action of re-framing the mise en scene in no way removes the aural sound of the NPC speaking from the viewer's accepted Macro-mise en scene.[2] The movement of the framed mise en scene away from the direct source doesn't diminish its importance to the 'scene' as it certainly would in traditional cinematic grammar. Further examples can be seen in games such as Doom 3 (and numerous others) which take place in long series of winding interior hallways, corridors and rooms. Here the game narrative, and the viewer/player's progression through it, is in large part driven and guided by sound attached to specific places and environments. In filmic terms this would simply be called 'room tone', the ambient, atmospheric sound of a space based on its size and contents. In gaming, however, this room tone becomes a crucial element of the spatially based narrative journey. The player/viewer navigates via sound, knows what is happening on the other side of a door by sound, knows if a creature/enemy is about to attack by sound and, in particular, knows the specific direction and trajectory, in relation to their viewer/player position, of all of these via sound placement.
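In engine terms, the directional cues described above reduce to per-source gain and pan values recomputed every frame from the geometry of the scene. The sketch below is purely illustrative (hypothetical code, not drawn from any actual game engine; the function and parameter names are the author's assumptions): it attenuates a mono source by its distance from the player and pans it by its bearing relative to the direction the player is facing.

```python
import math

def spatialize(source, listener, facing_deg, ref_dist=1.0):
    """Toy model of per-frame sound spatialization in an FPS engine.

    Returns (left_gain, right_gain) for a mono source, based on its
    distance from the listener and its angle relative to the direction
    the listener is facing. Real engines layer reverberation, occlusion
    and Doppler effects on top of arithmetic like this.
    """
    dx, dy = source[0] - listener[0], source[1] - listener[1]
    dist = math.hypot(dx, dy)

    # Inverse-distance attenuation, clamped so gain never exceeds 1.0.
    gain = ref_dist / max(dist, ref_dist)

    # Bearing of the source relative to the listener's facing,
    # normalized to [-180, 180); positive means 'to the right' here.
    bearing = math.degrees(math.atan2(dy, dx)) - facing_deg
    bearing = (bearing + 180.0) % 360.0 - 180.0

    # Constant-power pan law: -90 deg = hard left, +90 deg = hard right.
    p = max(-1.0, min(1.0, bearing / 90.0))
    theta = (p + 1.0) * math.pi / 4.0
    return gain * math.cos(theta), gain * math.sin(theta)
```

The point of the sketch is that the 'mix' is a function of position alone: when the player turns the 'camera' away from a speaking NPC, only `facing_deg` changes, so the voice slides between channels while its distance-based level stays constant. The re-mix is performed by movement, not by a post-production fader.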
[2] In a case such as this the player/viewer subsequently performs their own direct and proactive sound 'mix' of the level, spatial placement and distance of the NPC voice, through reverberation and acoustic effects. This creates an actuality of sound that is mixed and controlled not by the imposed and constructed means of post-production choices but by the actuality of location and spatial placement.

Many games of this genre present a labyrinth-like environment where often the only way to know if your avatar is going in the right direction is by the spatial arrangement of sound. As gaming sound designer Lee comments, "In the film world, coherence in sonic language is easier to achieve because the audio track is composed for a predictable, linear medium... in the game world, the audio remains the linear component, while the image becomes dynamic!" (Lee CC, 1999). In other words, under the gaming aesthetic, where the framed image is a completely dynamic and constantly (unpredictably) shifting element, it is the surround-sound design that becomes the linear driver of the viewer/player's perception of cinematic space, rather than the inferred three-dimensionality of screen-based light and shadow described by Bordwell and Thompson. The mise en scene frame loses its totality, its completeness, and becomes a dynamic portal to a larger composition rather than the composition itself.
Where Gaming and Cinema Meet
Increasingly these notions of spatial awareness and aural specificity cross over from the gaming environment, where they are most at home, to cinema, where the tools and technology (home theatre and cinema surround-sound systems, Dolby 5.1, etc.) are now very much the standard but are still largely under-utilised in anything but a superficial way. As Randy Thom states, "What passes for 'great sound' in films today is too often merely loud sound" (1999).
Yu comments on this spatial actuality through sound bleeding in relation to cinema: "We can also hear not only the sound space the character of the film is in but also what is outside. Sounds with character can get in through a window." (Yu E. 2003) This idea takes the notion of an immersive aural environment one step further. Not only is the audience placed into the same space as the characters, subsequently hearing the environment around those characters, but, moreover, they hear what the character hears, which goes beyond the environment the camera/viewer/characters are placed in and takes in notions of space beyond the one the viewer is immersed in at any given time. Gus Van Sant's Elephant (2003) presents a cinematic example of a very effective paired relationship between camera technique and surround sound, one that can be said to derive directly from gaming sensibilities of spatial placement and the Macro-mise en scene composition of a larger space that the camera is free to range over, embracing spatially specific but non-visual elements. A simple but particularly good example of this correlation in Elephant is the scene involving a meeting of a small group of students discussing homosexuality. In this scene sound and vision exist in the same composed space, the same Macro-mise en scene, but are not necessarily aligned in the same mise en scene frame. The camera moves into the room tracking a particular student who is slightly late for the meeting. As she sits, the camera moves to the centre of a circle of seated students and it continues to move, panning slowly around the room, from left to right, taking in a medium close-up of each student in succession. The discussion is not a boisterous one; each student (and the teacher), for the most part, is given time to speak and comment in turn whilst the camera continues to move slowly but without pausing.
The key difference in the staging of this scene is not the fluid movement of the camera but the fact that the camera is not necessarily (indeed rarely) focused on the person who is actually speaking. The voice of each person is heard, and heard in surround spatial placement relative to their position in the circle (and to the viewer), but the camera is independent of this aural composition. It moves across people before they speak, after they speak and indeed across people who don't speak at all (or at least not at the same time as the camera is on them). Only occasionally is the camera focused on the source of the sound. With this style Van Sant creates a scene where sound and vision exist in the same scene, both equally prominent to the viewer's experience of the scene and the scene's narrative, but sound and vision do not exist in the same frame. The visual and aural frames are independent or, more accurately, the visual frame shows just one mobile part of the audience's perceived Macro-mise en scene. Van Sant takes this further in Elephant by using spatially placed sound to construct for the viewer a specific spatial awareness of the larger environment beyond the traditional mise en scene. The narrative of Elephant works in a broken, non-linear fashion, shifting both forwards and backwards, forcing the viewer to piece together the discontinuous moments in time. Moreover, Van Sant shows several scenes a number of times over, each from a different perspective, following a different character or with a different focus. One particular example involves three characters who cross paths in the school hallway, with each version following a character into the hallway from a different direction. The sound placement, and the sounds themselves, are very specific in all three of these scenes and spatially truthful in their replication. In particular, in each version there is the sound of a car and its door opening and closing in the school car park.
This car park is given an exact location in relation to the hallway through this sound, as are the music practice room and the distant proximity of the cafeteria where most students are. Elephant follows no particular protagonist but rather an ensemble of individual characters (in many cases with no direct connection to each other) through this particular day in their school lives. Because the film ultimately involves the tragedy of a 'Columbine'-like shooting at the school, the physical/spatial location of each of these student characters we are following (in many cases literally, with a Steadicam) within the school environment becomes crucial to the narrative and, in particular, to our sense of drama (as we know what's around the corner – literally and spatially – before they do; a long-established tenet of the Horror and Thriller genres, here taken further with surround sound). The above example of the car door placed specifically in the surround-sound array outside a certain hallway window becomes important because when we follow the character of John he moves from the hallway outside to the car park, where he crosses paths with the two student gunmen entering the school. Each time we see the scene in flashback from a different (even reversed) perspective, we are reminded that the gunmen are coming from that particular direction in relation to the hallway. From here the viewer, through sound location, much as a Doom 3 player/viewer navigates by sound, is able to piece together not only the narrative as we traditionally understand it but also the space, and the characters' relationship to that space, which is so crucial to the drama of the film. Van Sant in Elephant essentially composes a Macro-mise en scene, the entire school spatially, aurally and visually, and then manoeuvres a camera through this 'composition'.
For the audience the mise en scene is much larger than the frame and there is no illusion that the frame represents a complete 'totality' of the scene, just a small section of it. This is a definitively game-like approach, as the process of designing and 'composing' a game is one of creating an entire space in which the player/viewer's personal mise en scene frame will be just one incomplete window on the larger composition; a larger composition the viewer must be aware of as a whole outside of their visible frame.
Gaming Co-existence: Long-Take and Montage
The two grand, pillar-like extents of cinematic thinking invariably break down to the divergent ideas of cinema's role as an art form expounded by Sergei Eisenstein on one side, in the form of Montage (i.e. the power of the cut as a key cinematic tenet), and André Bazin on the other, in the realism of the Long Take (i.e. the power of the uninterrupted shot). Both these perspectives, Formalism in the former and Realism in the latter, can be seen as tangible underpinnings of computer gaming design and practice, and yet neither analytical tool alone allows us a complete picture with which to understand this new sense of the Macro-mise en scene as a compositional paradigm. The FPS genre (and much gaming in general), on the surface, seems to embody Bazinian thinking at its most pure, in that the immersive, first-person perspective of the player/viewer's mise en scene is able to naturally and organically present a continuous 'long-take' cinematic style of spatial realism, free of the disjointed, constructed nature of montage. And yet this is certainly not to say that FPS genre gaming, and the worlds such games construct, is built around a purely Realist aesthetic, or that key ideas of Montage and Formalism do not play a significant role. A good example of fused Realist and Formalist aesthetics in gaming can be seen in games such as Star Wars Jedi Academy (2003), Thief: Deadly Shadows (2004) and many, many others like them.
Much like any other FPS genre game, the narrative of Jedi Academy revolves around the player/viewer taking on an avatar (in this case a Jedi Knight from the canon of the original Star Wars films of 1977, 1980 and 1983). From a Bazinian, Realist perspective a large proportion of the game plays out on a real-time basis of long continuous takes. Moreover, added to the cinematic notion of the 'long take' is its partner, 'deep focus', the two of which are the cornerstones of much of the cinematic Realist theory expounded by Bazin. Indeed, ideas of depth-of-field and deep focus are more purely at home in 3D graphics than anywhere else, as depth-of-field is a construct that belongs solely to camera lenses. In a camera-less scene made up of computer-generated, coloured pixels, there is no such thing as depth-of-field: focus is infinite.[3] No matter how close or far away from the framed perspective, the scene and subjects are always in focus. Potentially, at least in the short term, this complete lack of depth-of-field is problematic for our established cultural knowledge of film grammar, and so gaming presents a direct challenge to the established way we see the moving image and view created space. Viewership of cinema is very attached to the notion of a forced depth-of-field, as it has been central to our visual understanding of the captured photographic image since the invention of the still-image camera. So much so that it is common practice to 'fake' or impose depth-of-field effects on 3D animated films, both to engage with an established notion of what filmic cinema looks like and to invoke a heightened sense of realism, with the sub-conscious notion that the animated events have been captured on film as 'live' events rather than constructed through the purely technical process of cel or keyframe-based animation. This extends into gaming, as the implication is that if depth of field as a visual tenet is removed from gaming animation we lose a connection with the gaming visuals (which at least for now aren't totally photo-realistic) as live-action elements, subsequently undermining their visual truth. That said, Jedi Knight is arguably far more 'cinematic' than it is 'realist' in that it employs a wide variety of cinematic devices to deliberately break the continuity of the long-take, immersive, first-person perspective of an imaginary world. These breaks are known in gaming terms as 'cut scenes'. When the player/viewer completes a set task or game level, the continuity of proactively driven action is broken by a shift in perspective from first to third person. Here a 'cut-scene' plays out where the player/viewer is no longer in control of the avatar and indeed no longer viewing the mise en scene through the eyes of that character.

[3] That is of course unless the game designer prescribes the artificial addition of 'lens effects' or synthesized simulations of the effect of cinematographic lenses on the mise en scene. This is in fact a very common practice in 3D animated films and numerous examples can be cited in films such as Toy Story (1995) and Shrek (2001), where specific lens focal lengths and, in particular, lens flare (or light refraction from a virtual sun) have been artificially added to the scene. Presumably this is done to, somewhat ironically, provoke a greater sense of realism for the viewer, who has a long-standing visual grammar associating the camera lens with the depiction of live-action realism. In 3D gaming, whilst devices such as lens flare are common, depth of field and focal length are almost unheard of.
Not only does this deliberate shift invoke Montage ideas of shifting and cutting camera angles, but very often the virtual camera will move away from the human eye-line to shoot the cut-scene from non-human low and high angles in an instantly recognizable invocation of Hollywood cinema. Taking this idea further, Jedi Knight (along with other games such as Max Payne (2001)) continually breaks from notions of real-time and first-person throughout the proactive play of the game. In Jedi Knight, when a light-sabre-wielding enemy is defeated, the final killing blow (delivered on the part of the player/viewer's avatar) is shown from a sweeping third-person camera perspective moving in space and in slow motion. This is a purely cinematic, Formalist, imposed perspective; one that Eisenstein would argue shows cinema's status as art because of its divergence from perceivable 'reality'. Likewise Max Payne actually makes these un-realistic cinematic impositions a central part of the game. The player/viewer is able to deliberately enter into slow-motion, sweeping-camera, diving actions (viewed from a pseudo over-the-shoulder third person, and immediately recognizable as stock-standard Hollywood moments) as a tactic for winning the game and defeating enemies. At the other end of the spectrum there is a new perspective entering gaming that seems to be the inevitable evolutionary extension of these notions of an immersive imaginary world. The hugely successful Half-Life 2 (2004) presents a game that would appear to represent a Realist perspective of cinematic viewership in its purest form. Half-Life 2 is an FPS game much like any other but with one distinct difference: it is the first and (at the time of writing) only FPS game to present the entire game experience, from the moment of starting until game-over, in player-controllable first person.
Apart from a simple still graphic shown whilst the game is loading data, Half-Life 2 has no cut-scenes of any kind; its game narrative is presented entirely in first-person, real time. There is only one shift in time, which moves the story forward one week, but this is effected not via a cinematic cut but as a part of the game's narrative (the player/viewer enters a machine that shifts them conveniently forward in time). The effect of this is the player/viewer's acceptance of the time shift as truthful to the game narrative rather than cinematically imposed. All this would imply that, if it can be argued that FPS gaming and the worlds such games construct hold at their core an ideal of immersive realism, then Half-Life 2 is a game that confirms Bazinian ideas of the long-take mise en scene as the best embodiment of cinematic realism. And yet Bazin's idea of the mise en scene, that "The camera cannot see everything at once, but it makes sure not to lose any part of what it chooses to see" (1972), does not allow for the new aesthetics of a mise en scene that doesn't need to see everything, principally due to surround sound. As an example, the prologue of Half-Life 2 is presented solely through the eyes of the character/avatar being controlled by the player/viewer, so from the very outset of the game the player is purely and proactively immersed in this character's perspective; no cuts, just a single long take. Another character, the G-Man, has 'you' trapped, drugged and strapped to a chair. But, out of kilter with Bazin and most commonly held notions of the totality of the mise en scene, the player/viewer is in tangible control during this prologue and so is able to look away, re-framing the mise en scene beyond what would otherwise be the focus and contents of the frame. This prologue is composed as a Macro-mise en scene in which numerous elements exist, the character of the G-Man not least amongst them, in a space shared by the player/viewer.
The 'camera' in this context is not required to frame the G-Man for him to be part of the mise en scene. The space is composed without necessarily requiring direct reference to the visual contents of the frame. With the gaming aesthetic, the 'camera' does not need to see everything or even try to see everything, nor is it restricted to seeing everything that is critical. A macro-mise en scene has been composed.

Future Space

There are precious few films that have gone any real way towards exploiting or engaging with the notion of a macro-mise en scene driven by a gaming sensibility of space, employing surround sound as a creative spatial construct that influences and changes camera technique and our accepted notions of visual framing. Sadly, surround sound remains largely limited to explosions and large-scale blockbuster effects. It is fair to say that a great deal of the truly inventive and forward-thinking use of spatial sound, and indeed of the embrace of spatiality and the macro-mise en scene in general as a compositional tool, is coming from 3D computer gaming. Where computer games have long borrowed aesthetic, cultural and technical influences from popular cinema, it is now also fair to say that gaming aesthetics, in regard to the creation and composition of imaginary worlds and spaces, are having a profound impact on cinema itself and on our perceptions of cinema's language and form. Statistics alone stand as a substantial and solid indicator of a current and future shift in our perceptions of what popular media is. Spider-Man 2 (2004) grossed US$40.4 million in its opening weekend and was hailed as a huge popular cinema box-office success. In contrast, the release of the FPS-genre Xbox console game Halo 2 drew $125 million in its first weekend of sales, far eclipsing Hollywood's best efforts.
If sheer popularity, public take-up and entertainment saturation can be taken as substantial indicators, then gaming, and its detailed understanding of the composition of space, may well come to be seen as the new dominant visual language and discourse. In this regard the notion of the macro-mise en scene as a central element of game design, encompassing immersive aural and visual constructs, becomes, by proxy, a central hub of all future media compositional thinking.